EEG - Event Related Potentials (ERP) Detection
Difficulty Level:
Tags: detect, eeg, erp, p300

Event Related Potentials (ERP) can be triggered by auditory, visual or somatosensory stimuli. One example is the P300, a positive deflection in the EEG occurring about 300 ms after an odd (rare) event.

The Odd-Ball Paradigm is the standard method to trigger the P300. Synchronising the EEG with the precise moment of stimulus onset is important because the evoked potentials occur between 100 and 900 ms after stimulus onset (ref).
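As an illustration of the paradigm, the sketch below generates a hypothetical odd-ball stimulus sequence. The 20 % odd-tone probability and the rule forbidding two consecutive odd tones are assumptions for this example; the stimuli used in this Notebook were generated beforehand.

```python
# Sketch: generate a hypothetical odd-ball stimulus sequence
# (assumed parameters: ~20% odd tones, no two odd tones in a row).
from random import Random

def oddball_sequence(n_stimuli, odd_prob=0.2, seed=0):
    rng = Random(seed)
    seq = []
    for _ in range(n_stimuli):
        # keep odd events rare by never allowing two in a row
        if seq and seq[-1] == "odd":
            seq.append("regular")
        elif rng.random() < odd_prob:
            seq.append("odd")
        else:
            seq.append("regular")
    return seq

seq = oddball_sequence(20)
print(seq)
```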

This Jupyter Notebook explains how to detect a P300 using PLUX's single-channel EEG sensor together with a device that synchronises a previously generated acoustic stimulus with the recorded data. Moreover, the processing of the acquired EEG data is illustrated for the detection of the ERP for two test subjects.

The algorithm was tested on two subjects; however, to keep the Jupyter Notebook simple, only data from Subject 1 is presented.


1 - Experimental Setup:

  1. Electrode/EEG sensor setup - Place the electrodes at the Cz/Pz positions as well as one reference electrode at the M1 position behind the ear, connect them to the EEG sensor and fix them with a headband.
  2. Acoustic stimuli setup - Connect the audio output cable to the headphones as well as to the computer/device used to play the stimuli (turn the computer volume up to maximum and the headphone volume down to minimum for optimal signal quality).
  3. Hub connection - Connect both the EEG sensor and the audio output + headphones to the biosignalsplux hub and launch OpenSignals.

For more information on electrode positioning please refer to our Notebook on Electrode Positioning .
The Audio output cable (3) for connection with headphones as well as a computer and the biosignalsplux hub is available upon request.

  1. Reducing the impact of visual stimuli - The subject relaxes and closes the eyes to reduce noise in the signal.
  2. Hardware configuration through the OpenSignals software - Start the data acquisition in OpenSignals (acoustic channel = CUSTOM, EEG channel = EEG).
  3. Generation of acoustic stimuli - Turn on the acoustic stimuli.

2 - Processing of Data:

2.1 - Importing relevant packages

In [1]:
# biosignalsnotebooks python package
import biosignalsnotebooks as bsnb

# Scientific packages
from numpy import mean, array, concatenate

2.2 - Load data from the signal samples library

In [2]:
# Load of data Subject 1
relative_file_path = "../../signal_samples/eeg_acoustic_a.h5"
data, header = bsnb.load(relative_file_path, get_header=True)

2.3 - Check the file header data

In [3]:
from sty import fg, rs
print(fg(98,195,238) + "\033[1mHeader\033[0m" + fg.rs)
print(header)
print(fg(98,195,238) + "\033[1mData Structure\033[0m" + fg.rs)
print(data)
Header
{'channels': array([1, 5]), 'comments': '', 'date': '2019-05-24', 'device': 'biosignalsplux', 'device connection': b'BTH00:07:80:D8:A8:82', 'device name': b'00:07:80:D8:A8:82', 'digital IO': array([0, 1]), 'firmware version': '0', 'resolution': array([16, 16]), 'sampling rate': 1000, 'sync interval': 2, 'time': '14:31:08', 'sensor': [b'RAW', b'RAW'], 'column labels': {1: 'channel_1', 5: 'channel_5'}}
Data Structure
{'CH1': array([32816, 32811, 32811, ..., 32812, 32806, 32815], dtype=uint32), 'CH5': array([30626, 30373, 30261, ..., 29183, 29101, 29319], dtype=uint32)}

2.4 - Store information from the file header

In [4]:
# Store the information from the header in variables
ch1 = "CH1" # Channel 1 (acoustic stimuli)
ch5 = "CH5" # Channel 5 (EEG)
sr = header["sampling rate"] # Sampling rate
resolution = header["resolution"][0] # Resolution (number of available bits)
device = header["device"]

2.5 - Store the desired data in an individual variable (for both subjects)

In [5]:
#RAW DATA
signal_acoustic = data[ch1]
signal_eeg = data[ch5]

2.6 - Convert the RAW data to values with a physical meaning (for the EEG, electric voltage in uV)

In [6]:
# Signal Samples Conversion
#EEG signal [Subject 1]:
signal_uv = bsnb.raw_to_phy("EEG", device, signal_eeg, resolution, option="uV") # Conversion to uV

#sound stimuli:
signal_ac = signal_acoustic - mean(signal_acoustic)
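For reference, the sketch below shows the kind of transfer function that raw_to_phy applies for the EEG sensor. The VCC and sensor gain constants are assumptions for illustration; check the PLUX EEG sensor datasheet for the exact values.

```python
# Sketch of a RAW-to-uV conversion for an EEG sensor.
# vcc and gain below are assumed values (confirm in the sensor datasheet);
# bsnb.raw_to_phy performs the official conversion.
from numpy import array

def eeg_raw_to_uv(raw, n_bits=16, vcc=3.0, gain=41782):
    raw = array(raw, dtype=float)
    # Centre the ADC range, scale by the supply voltage and undo the amplifier gain
    volts = ((raw / 2 ** n_bits) - 0.5) * vcc / gain
    return volts * 1e6  # V -> uV

sample = eeg_raw_to_uv([30626, 32768])
print(sample)  # mid-range ADC value (32768) maps to 0 uV
```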

2.7 - Generate a time-axis

In [7]:
# EEG signal:
time_eeg = bsnb.generate_time(signal_uv, sr)
# Sound stimuli:
time_a = bsnb.generate_time(signal_acoustic, sr)

2.8 - Plot Data of Subject 1 (our major example)

Considering that the following chart is intended to provide only a general overview of the acquired data, a downsampling stage is applied to decrease memory demands.

In [8]:
# [Subject 1]
# Downsampling data.
time_eeg_down = time_eeg[::10]
signal_uv_down = signal_uv[::10]

#EEG signal:
bsnb.plot([time_eeg_down], [signal_uv_down], y_axis_label="Electric Tension (uV)", legend="EEG RAW", x_range=(0, 310))
In [9]:
# Downsampling data.
time_a_down = time_a[::10]
signal_ac_down = signal_ac[::10]

#sound stimuli:
bsnb.plot([time_a_down], [signal_ac_down], y_axis_label="Value RAW", legend="Acoustic Stimuli RAW", x_range=(0, 310))

2.9 - Filtering:

A) Acquired Acoustic Stimuli

First, the sample index marking the start of the stimuli needs to be determined, so that unwanted parts of the EEG signal can be removed. This index can be identified visually in the plot of the raw acoustic signal. The same start index is assigned to the EEG signal:

In [10]:
# [Subject 1]
#define index of beginning of stimuli:
sound_begin = 46000 #test subject 1 (visual inspection)
eeg_begin = sound_begin

Filter the acquired sound stimuli signal using a lowpass with a cutoff frequency of 440 Hz, remove the shift from the baseline and rectify the signal:

In [11]:
# [Subject 1]
# Acoustic signal filtering
filter_sound = bsnb.lowpass(signal_acoustic, f=440, order=2, fs=sr)
base_sound = filter_sound - mean(filter_sound)
rect_sound = abs(base_sound[sound_begin:])

Smooth the signal using a specified smoothing level:

In [12]:
# Smoothing level [Size of sliding window]
smoothing_level_perc = 2 # Percentage.
smoothing_level = int((smoothing_level_perc / 100) * sr)

#Smooth the signal
smooth_sound = bsnb.smooth(rect_sound, smoothing_level, window='hanning')

Generate a new time axis for the signal:

In [13]:
time_sound = bsnb.generate_time(smooth_sound,sr)

A.1) Generating thresholds to create a precise stimuli vector:
Define three threshold percentage values: a maximum and a minimum for the regular tone, and a low onset value shared by the odd and regular tones:

In [14]:
#Threshold percentage values:
thresh_1_p = 0.55 #regular sound max
thresh_2_p = 0.015 #onset odd and regular sound
thresh_3_p = 0.25 #regular sound min

Find the maximum value of the regular sound and define the corresponding thresholds:

In [15]:
# [Subject 1]
# Find the maximum of the rectified sound and define the corresponding thresholds:
max_1 = rect_sound.max()
thresh_1 = thresh_1_p * max_1
thresh_2 = thresh_2_p * max_1
thresh_3 = thresh_3_p * max_1
print("sound_max:", max_1, "sound_thresh_onset", thresh_2, "sound_thresh_reg_max", thresh_1, "sound_thresh_reg_min", thresh_3)
sound_max: 31252.68354811072 sound_thresh_onset 468.79025322166075 sound_thresh_reg_max 17188.975951460896 sound_thresh_reg_min 7813.17088702768

Create a stimuli vector for the start index of each regular and odd tone:

In [16]:
# Stimuli vector with the start index of each regular and odd tone
index_on = []
index_on_Odd = []

flag = -1500  # ensures the first detection is accepted (1.5 s jump between detections)
for index, i in enumerate(rect_sound[:-20]):  # stop 20 samples early to keep the look-ahead in range
    # Regular tone: signal exceeds the upper threshold; jump 1.5 s before the next check
    if i > thresh_1 and index > 1500 + flag:
        index_on.append(index)
        flag = index
    # Odd tone: signal rises above the onset threshold but stays below the upper
    # threshold over the following samples; jump 1.5 s before the next check
    elif thresh_2 < i < thresh_1 and rect_sound[index + 10] < thresh_1 and rect_sound[index + 15] < thresh_1 and rect_sound[index + 20] < thresh_1 and index > 1500 + flag:
        index_on_Odd.append(index)
        flag = index

The precision of this threshold method is approximately 2 ms (assessed by visual inspection).
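A quick way to sanity-check the detected onsets is to verify that consecutive indices are spaced by roughly the known inter-stimulus interval. The helper below is a hypothetical sketch; the nominal interval and tolerance are assumed values, and in the Notebook index_on would be passed instead of the example list.

```python
# Sketch: sanity-check detected onset indices by their spacing.
# Assumes a nominal inter-stimulus interval (here 2000 samples = 2 s at 1 kHz)
# and flags detections deviating from it by more than a tolerance.
from numpy import diff, array

def check_onset_spacing(onsets, nominal=2000, tol=100):
    gaps = diff(array(onsets))
    # return (position, gap) pairs for every suspicious inter-onset gap
    return [(i, int(g)) for i, g in enumerate(gaps) if abs(g - nominal) > tol]

# hypothetical onset indices with one spurious detection at 5300
flagged = check_onset_spacing([1000, 3000, 5300, 7000, 9000])
print(flagged)  # → [(1, 2300), (2, 1700)]
```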

Visualise the stimuli onset:

In [17]:
from bokeh.layouts import gridplot
from bokeh.plotting import show
# Constant-value lists so the thresholds can be added to the legend of the next plot
thresh1 = [thresh_1] * len(time_sound)
thresh2 = [thresh_2] * len(time_sound)
thresh3 = [thresh_3] * len(time_sound)

# Downsampling data.
time_sound_down = time_sound[::10]
rect_sound_down = rect_sound[::10]
smooth_sound_down = smooth_sound[::10]
thresh1_down = thresh1[::10]
thresh2_down = thresh2[::10]
    
#HIDE IN
fig_list = bsnb.plot([list(time_sound_down), list(time_sound_down)], [list(rect_sound_down), list(smooth_sound_down)],legend_label=[ "Rectified Signal","Smoothed Signal"], grid_plot=True,grid_lines=1, grid_columns=2, opensignals_style=True, show_plot=False, x_axis_label="Time (s)", y_axis_label=[ "Raw Data","Raw Data"], get_fig_list=True,x_range=(112,117))

bsnb.opensignals_color_pallet() # advance to the next color to improve contrast.
fig_list[0].line(time_sound_down,smooth_sound_down,legend_label="Smoothed Signal", line_color=bsnb.opensignals_color_pallet())
fig_list[1].line(time_sound_down, thresh1_down,legend_label="Threshold 1", line_color=bsnb.opensignals_color_pallet())
fig_list[1].line(time_sound_down, thresh2_down,legend_label="Threshold 2", line_color=bsnb.opensignals_color_pallet())
grid_plot = gridplot([[fig_list[0],fig_list[1]]], **bsnb.opensignals_kwargs("gridplot"))

show(grid_plot)
In [18]:
# Find the intersections of the smoothed signal with the thresholds (tone onsets)
intersect_on = []
intersect_index_on = []
intersect_odd = []
intersect_index_odd = []

for index, i in enumerate(smooth_sound[:-500]):  # stop 500 samples early to keep the look-ahead in range
    # Regular onset: the smoothed signal crosses thresh_3 upwards
    if i >= thresh_3 and smooth_sound[index - 1] < thresh_3:
        intersect_on.append(thresh_3)
        intersect_index_on.append(index)
    # Odd onset: the signal crosses thresh_2 upwards but stays below thresh_3 in the following 500 ms
    elif i >= thresh_2 and smooth_sound[index - 1] < thresh_2 and smooth_sound[index + 500] < thresh_3:
        intersect_odd.append(thresh_2)
        intersect_index_odd.append(index)

fig_list = bsnb.plot([list(time_sound)], [list(smooth_sound)], title=["Smoothed Signal"], legend_label=["Smoothed Signal"], grid_plot=True, grid_lines=1, grid_columns=2, opensignals_style=True, show_plot=False, x_axis_label="Time (s)", y_axis_label="Smoothed Acoustic Signal", x_range=(128, 134), get_fig_list=True)
fig_list2 = bsnb.plot([list(time_sound)], [list(signal_ac)[46000:]], title=["Original Signal"], legend_label=["Original Signal"], grid_plot=True, grid_lines=1, grid_columns=2, opensignals_style=True, show_plot=False, x_axis_label="Time (s)", y_axis_label="Value RAW", get_fig_list=True, x_range=(128, 134))


fig_list[0].line(time_sound, thresh3,legend_label="Threshold 3", line_color=bsnb.opensignals_color_pallet())
fig_list[0].line(time_sound, thresh2,legend_label="Threshold 2", line_color=bsnb.opensignals_color_pallet())
fig_list[0].circle(array(intersect_index_on)/sr,intersect_on, size=10, color=bsnb.opensignals_color_pallet(),
                   legend_label="Onset Reg")
fig_list[0].circle(array(intersect_index_odd)/sr,intersect_odd, size=10, color=bsnb.opensignals_color_pallet(),
                   legend_label="Onset Odd")
fig_list2[0].circle(array(intersect_index_on)/sr,intersect_on, size=10, color=bsnb.opensignals_color_pallet(),
                   legend_label="Onset Reg")
fig_list2[0].circle(array(intersect_index_odd)/sr,intersect_odd, size=10, color=bsnb.opensignals_color_pallet(),
                   legend_label="Onset Odd")
grid_plot = gridplot([[fig_list[0], fig_list2[0]]], **bsnb.opensignals_kwargs("gridplot"))

show(grid_plot)

B) Filtering of the EEG Signal - window-wise
Bandpass filter the signal with cutoff frequencies of 0.1 and 40 Hz for noise reduction:

In [19]:
# [Subject 1]
signal_neww = []

# Filter the signal window-wise, for each window between consecutive stimuli onsets
signal_uv = signal_uv[eeg_begin:]  # Keep only the segment after the stimuli start
for n in range(len(index_on) - 1):
    new_signal = signal_uv[index_on[n]:index_on[n + 1]]
    signal_new = array(new_signal) - mean(array(new_signal))

    # Bandpass with cutoff frequencies f1=0.1 Hz and f2=40 Hz:
    filter_signal = bsnb.bandpass(signal_new, f1=0.1, f2=40, order=2, fs=sr)
    signal_neww.append(filter_signal)

# Shift the onset indices so that they refer to the start of the filtered signal
index_on_Odd_new = array(index_on_Odd) - index_on[0]
index_on_new = array(index_on) - index_on[0]
signal = concatenate(signal_neww)

Generate the time windows after stimuli onset:

In [20]:
# [Subject 1]
# Odd and regular time windows may overlap; a window of 1000 ms after the start of each tone is chosen
time_window = 1000  # ms (= samples, since sr = 1000 Hz)

# Window the EEG signal into segments of time_window ms after each odd tone
windows_Odd = []
for n in range(len(index_on_Odd_new) - 1):
    windows_Odd.append(signal[index_on_Odd_new[n]:index_on_Odd_new[n] + time_window])

# ... and after each regular tone
windows_reg = []
for n in range(len(index_on_new) - 1):
    windows_reg.append(signal[index_on_new[n]:index_on_new[n] + time_window])

Apply the mean_wave function to the signal windows, averaging over all windows of the odd and regular tone responses to cancel out the EEG noise:

Due to noise, the small ERP-related changes in EEG amplitude cannot be seen in a single trial and vanish in the signal. Averaging over the time windows that start at each known stimulus onset enables the detection of the ERP, because the EEG noise is uncorrelated with the ERP (ref).

In [21]:
# [Subject 1]
#Apply the mean on all waves after the odd tone
mean_signal = bsnb.mean_wave(windows_Odd)
#Apply the mean on all waves after the regular tone
mean_signal_reg = bsnb.mean_wave(windows_reg)
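The effect of this averaging can be illustrated with a small simulation (all parameters below are illustrative assumptions, not values from this acquisition): averaging N epochs of a time-locked response embedded in uncorrelated noise reduces the noise level by roughly the square root of N, while the response itself is preserved.

```python
# Sketch: why epoch averaging reveals the ERP - uncorrelated noise
# shrinks by ~sqrt(N) while the time-locked response does not.
from numpy import arange, exp, mean, std, array
from numpy.random import default_rng

rng = default_rng(0)
t = arange(0, 1.0, 0.001)                            # 1 s epoch at 1 kHz
erp = 5 * exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))   # simulated 5 uV "P300" at 300 ms

n_epochs = 200
epochs = array([erp + rng.normal(0, 20, t.size) for _ in range(n_epochs)])

single_noise = std(epochs[0] - erp)   # noise level of a single trial (~20 uV)
avg = mean(epochs, axis=0)            # averaged wave, as mean_wave would return
avg_noise = std(avg - erp)            # residual noise after averaging

print(single_noise / avg_noise)  # ratio close to sqrt(200) ≈ 14.1
```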

2.10 - Visualise the P300:

In [22]:
time_t = bsnb.generate_time(mean_signal,sr)

fig_list1 = bsnb.plot([list(time_t)], [list(mean_signal)], title=["Subject 1"],legend_label=["target (Odd Stimuli)"], grid_plot=True, grid_lines=1, grid_columns=2,  opensignals_style=True, show_plot=False, x_axis_label="Time (s)", y_axis_label= "EEG (uV)", get_fig_list=True,x_range=(0.175,0.40),y_range=(-4.5,5.5))
fig_list1[0].line(time_t, mean_signal_reg,legend_label="non-target (Regular Stimuli)",line_dash=[1,2],line_width=6)

grid_plot = gridplot([[fig_list1[0]]], **bsnb.opensignals_kwargs("gridplot"))

show(grid_plot)

As can be seen in the figure above, different EEG responses appear after the odd and the regular acoustic stimuli. In the time window around 300 ms after the stimulus, Subject 1 shows an averaged response with a maximum of about 2-6 uV after the odd stimuli, whereas the averaged response to the regular stimuli stays around 0-2 uV.
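To quantify this difference, the peak amplitude and latency can be extracted from the averaged wave. The sketch below runs on a simulated averaged response; the 250-500 ms search window and the simulated waveform are assumptions, and in the Notebook mean_signal and mean_signal_reg would be passed instead.

```python
# Sketch: extract P300 peak amplitude and latency from an averaged wave
# in a 250-500 ms search window (the window bounds are an assumption).
from numpy import arange, exp, argmax

def p300_peak(wave, sr, t_min=0.250, t_max=0.500):
    lo, hi = int(t_min * sr), int(t_max * sr)
    idx = lo + argmax(wave[lo:hi])
    return wave[idx], idx / sr  # (amplitude, latency in s)

# simulated averaged response: 4 uV Gaussian peak at 310 ms, sampled at 1 kHz
sr = 1000
t = arange(0, 1.0, 1 / sr)
wave = 4 * exp(-((t - 0.310) ** 2) / (2 * 0.04 ** 2))

amp, lat = p300_peak(wave, sr)
print(round(amp, 2), round(lat, 3))  # → 4.0 0.31
```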

We hope that you have enjoyed this guide. biosignalsnotebooks is an environment in continuous expansion, so don't stop your journey and learn more with the remaining Notebooks!

In [23]:
from biosignalsnotebooks.__notebook_support__ import css_style_apply
css_style_apply()
.................... CSS Style Applied to Jupyter Notebook .........................
Out[23]: